Breaking Sticks and Ambiguities with Adaptive Skip-gram
The recently proposed Skip-gram model is a powerful method for learning
high-dimensional word representations that capture rich semantic relationships
between words. However, Skip-gram, like most prior work on learning word
representations, does not take word ambiguity into account and maintains only
a single representation per word. Although a number of Skip-gram modifications
have been proposed to overcome this limitation and learn multi-prototype word
representations, they either require the number of word meanings to be known in
advance or learn them using greedy heuristic approaches. In this paper we
propose the Adaptive Skip-gram model, a nonparametric Bayesian extension of
Skip-gram capable of automatically learning the required number of
representations for all words at the desired semantic resolution. We derive an
efficient online variational learning algorithm for the model and empirically
demonstrate its efficiency on the word-sense induction task.
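The "breaking sticks" in the title refers to the stick-breaking construction used in nonparametric Bayesian mixtures, which lets the number of word senses grow with the data. The following is an illustrative sketch of a truncated stick-breaking draw, not the paper's algorithm; the concentration parameter `alpha` and the truncation level are arbitrary choices for demonstration.

```python
import numpy as np

def stick_breaking_weights(alpha, num_sticks, rng):
    """Truncated stick-breaking construction: repeatedly break off a
    Beta(1, alpha)-distributed fraction of the remaining stick. The
    resulting weights can serve as prior probabilities over word senses,
    with most mass concentrated on the first few senses."""
    betas = rng.beta(1.0, alpha, size=num_sticks)
    # Length of stick remaining before each break: prod_{j<k} (1 - beta_j).
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining

rng = np.random.default_rng(0)
weights = stick_breaking_weights(alpha=1.0, num_sticks=10, rng=rng)
print(np.round(weights, 3))
```

A smaller `alpha` pushes more mass onto the first sticks (fewer effective senses); a larger `alpha` spreads mass over more sticks, which loosely corresponds to the "desired semantic resolution" knob described in the abstract.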
Equilibrium Aggregation: Encoding Sets via Optimization
Processing sets or other unordered, potentially variable-sized inputs in
neural networks is usually handled by aggregating a number of input tensors
into a single representation. While a number of aggregation methods already
exist from simple sum pooling to multi-head attention, they are limited in
their representational power both from theoretical and empirical perspectives.
In search of a fundamentally more powerful aggregation strategy, we propose
an optimization-based method called Equilibrium Aggregation. We show that many
existing aggregation methods can be recovered as special cases of Equilibrium
Aggregation and that it is provably more efficient in some important cases.
Equilibrium Aggregation can be used as a drop-in replacement in many existing
architectures and applications. We validate its efficiency on three different
tasks: median estimation, class counting, and molecular property prediction. In
all experiments, Equilibrium Aggregation achieves higher performance than the
other aggregation techniques we test.

Comment: Published at UAI 2022
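The core idea can be illustrated in a few lines: instead of applying a fixed pooling function, define the aggregate as the minimizer of a sum of per-element potentials and solve for it. The sketch below uses plain gradient descent as a stand-in for the paper's inner solver, and the potentials shown (quadratic, smoothed absolute value) are illustrative choices, not the learned energies the method actually uses.

```python
import numpy as np

def equilibrium_aggregate(xs, potential_grad, steps=2000, lr=0.01):
    """Aggregate a set by finding y* minimizing sum_i F(x_i, y),
    here via plain gradient descent on y."""
    y = float(np.mean(xs))  # initialize at the mean
    for _ in range(steps):
        y -= lr * sum(potential_grad(x, y) for x in xs)
    return y

xs = np.array([1.0, 2.0, 3.0, 100.0])

# A quadratic potential F = (y - x)^2 / 2 recovers mean pooling:
# the minimizer of the summed quadratics is the arithmetic mean.
mean_like = equilibrium_aggregate(xs, lambda x, y: y - x)

# A smoothed absolute-value potential approximates the median,
# matching the median-estimation task mentioned in the abstract.
eps = 1e-3
median_like = equilibrium_aggregate(
    xs, lambda x, y: (y - x) / np.sqrt((y - x) ** 2 + eps))
```

With the quadratic potential the result stays at the mean (26.5), while the robust potential pulls the aggregate toward the median region of [2, 3], ignoring the outlier at 100; this is the sense in which different potentials recover different aggregation methods as special cases.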